991.
The typical modeling approach to groundwater management relies on the combination of optimization algorithms and subsurface simulation models. In the case of groundwater supply systems, the management problem may be structured as an optimization problem that identifies the pumping scheme minimizing the total cost of the system while complying with a series of technical, economic, and hydrological constraints. Since the scarcity of subsurface data most often results in groundwater flow models that are inherently uncertain, the solution to the groundwater management problem should explicitly consider the trade-off between cost optimality and the risk of not meeting the management constraints. This work addresses parameter uncertainty through a stochastic simulation (Monte Carlo) approach, in which a sufficiently large ensemble of parameter scenarios is used to determine representative values from the statistical distribution of the management objectives, that is, minimizing cost while minimizing risk. In particular, the cost of the system is estimated as the expected value of the cost distribution sampled through stochastic simulation, while the risk of not meeting the management constraints is quantified as the expected intensity of constraint violation. The multi-objective optimization problem is solved by combining a multi-objective evolutionary algorithm with a stochastic model simulating groundwater flow in confined aquifers. Evolutionary algorithms are particularly well suited to optimization problems with non-linear and discontinuous objective functions and constraints, although they are computationally demanding and require intensive analyses to tune the input parameters that guarantee optimality of the solutions. To drastically reduce the otherwise overwhelming computational cost, a novel stochastic-flow reduced model is developed, which avoids including the full simulation model directly in the optimization loop. The computational efficiency of the proposed framework is such that it can be applied to problems with large numbers of decision variables.
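A minimal sketch of the Monte Carlo evaluation step described above: for one candidate pumping scheme, the two objectives are the expected cost and the expected intensity of constraint violation over an ensemble of parameter scenarios. The function names, the toy linear head-response surrogate, and the rate-times-lift cost model are illustrative assumptions, not the paper's reduced model.

```python
import numpy as np

def evaluate_pumping_scheme(pumping_rates, scenarios, head_limit, unit_cost=1.0):
    """Monte Carlo evaluation of one candidate pumping scheme.

    pumping_rates : array of well extraction rates (decision variables)
    scenarios     : iterable of surrogate flow models, one per parameter scenario
    Returns (expected_cost, expected_violation) -- the two objectives.
    """
    costs, violations = [], []
    for predict_heads in scenarios:
        heads = predict_heads(pumping_rates)                    # simulated heads at the wells
        cost = unit_cost * np.sum(pumping_rates * (60.0 - heads))  # toy cost: rate x lift (assumed ground elevation 60.0)
        # intensity of constraint violation: how far heads drop below the required limit
        violation = np.sum(np.maximum(head_limit - heads, 0.0))
        costs.append(cost)
        violations.append(violation)
    return np.mean(costs), np.mean(violations)

# toy ensemble: each "scenario" is a linear response surrogate with random coefficients
rng = np.random.default_rng(0)
scenarios = [(lambda q, a=rng.uniform(0.5, 1.5, 3): 50.0 - a * q) for _ in range(200)]
print(evaluate_pumping_scheme(np.array([5.0, 8.0, 3.0]), scenarios, head_limit=40.0))
```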
992.
This paper deals with a two-stage supply chain consisting of two distribution centers and two retailers. Each member of the supply chain uses a (Q,R) inventory policy and incurs standard inventory holding and backlog costs, as well as ordering and transportation costs. The distribution centers replenish their inventory from an outside supplier, and the retailers replenish inventory from one of the two distribution centers. When a retailer is ready to replenish its inventory, it must decide whether to order from the first or the second distribution center. We develop a decision rule that minimizes the total expected cost associated with all outstanding orders at the time of order placement; the retailers then repeatedly apply this decision rule as a heuristic. A simulation study comparing the proposed policy with three traditional ordering policies illustrates how it performs under different conditions. The numerical analysis shows that, over a large set of scenarios, the proposed policy outperforms the other three policies on average.
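The flavor of such a decision rule can be illustrated with a hedged sketch: at the moment of ordering, the retailer estimates the expected cost of routing the order to each distribution center and picks the cheaper one. The cost terms (transport, holding over the lead time, backlog on the shortfall) and all field names are simplified assumptions, not the authors' exact cost expression.

```python
import numpy as np

def expected_order_cost(dc, order_qty, holding=1.0, backlog=5.0, transport_per_unit=0.2):
    """Rough expected cost of routing one replenishment order to a given distribution center.

    dc describes the DC state at the moment the retailer orders:
      on_hand, outstanding (units already committed to earlier orders), lead_time, distance.
    Units the DC cannot cover immediately are charged at the backlog rate over the lead time;
    units available right away incur holding cost over the lead time.
    """
    available = dc["on_hand"] - dc["outstanding"]
    shortfall = max(order_qty - max(available, 0), 0)
    filled = order_qty - shortfall
    return (transport_per_unit * order_qty * dc["distance"]
            + holding * filled * dc["lead_time"]
            + backlog * shortfall * dc["lead_time"])

def choose_dc(dcs, order_qty):
    """Decision rule: send the replenishment order to the cheaper distribution center."""
    costs = [expected_order_cost(dc, order_qty) for dc in dcs]
    return int(np.argmin(costs)), costs

dcs = [
    {"on_hand": 120, "outstanding": 90, "lead_time": 2.0, "distance": 1.0},
    {"on_hand": 60,  "outstanding": 10, "lead_time": 3.0, "distance": 1.5},
]
print(choose_dc(dcs, order_qty=40))   # picks DC 1, which can fill the whole order
```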
993.
In this paper an analytical technique, called the optimal homotopy perturbation method (OHPM), is employed to study the nonlinear behaviour of an electrical machine modelled as a rotor supported by two journal bearings with nonlinear suspension. The dynamics of the rotor centre and bearing centre are studied, and the spatial displacements in the horizontal and vertical directions are obtained. It is shown that the main strength of the OHPM is its fast convergence: after only one iteration, very accurate results are obtained for a complicated nonlinear problem, which demonstrates that the method is very efficient in practice.
994.
Distributing data collections by fragmenting them is an effective way of improving the scalability of a database system. While the distribution of relational data is well understood, the unique characteristics of the XML data and query model present challenges that require different distribution techniques. In this paper, we show how XML data can be fragmented horizontally and vertically. Based on this, we propose solutions to two of the problems encountered in distributed query processing and optimization on XML data, namely localization and pruning. Localization takes a fragmentation-unaware query plan and converts it into a distributed query plan that can be executed at the sites holding XML data fragments in a distributed system. We then show how the resulting distributed query plan can be pruned so that only those sites are accessed that can contribute to the query result. We demonstrate that our techniques can be integrated into a real-life XML database system and that they significantly improve the performance of distributed query execution.
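The pruning idea can be illustrated with a toy sketch (not the paper's algorithm): if horizontal fragments are described by value ranges of a partitioning path, a localized plan only needs to contact the sites whose range can overlap the query predicate. Fragment descriptions and site names below are hypothetical.

```python
# Each horizontal fragment is summarised by the value range of a partitioning path;
# pruning keeps only fragments whose range can overlap the query's range predicate.

fragments = [
    {"site": "s1", "path": "/orders/order/year", "range": (2000, 2005)},
    {"site": "s2", "path": "/orders/order/year", "range": (2006, 2010)},
    {"site": "s3", "path": "/orders/order/year", "range": (2011, 2015)},
]

def prune(fragments, query_path, low, high):
    """Keep only the sites that might contribute to //order[year >= low and year <= high]."""
    kept = []
    for frag in fragments:
        lo, hi = frag["range"]
        if frag["path"] == query_path and not (hi < low or lo > high):
            kept.append(frag["site"])
    return kept

# Only sites s2 and s3 need to be contacted for years 2008-2012.
print(prune(fragments, "/orders/order/year", 2008, 2012))
```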
995.
The enormous increase in the amount of information on the World Wide Web has given rise to topic-specific crawling of the Web. During a focused crawling process, an automatic Web page classification mechanism is needed to determine whether the page under consideration is on topic or not. In this study, a genetic algorithm (GA) based automatic Web page classification system is developed that uses both HTML tags and the terms belonging to each tag as classification features and learns an optimal classifier from the positive and negative Web pages in the training dataset. Our system classifies Web pages by simply computing the similarity between the learned classifier and new Web pages. In existing GA-based classifiers, only HTML tags or only terms are used as features; in this study both are taken together and optimal weights for the features are learned by our GA. It was found that using both HTML tags and the terms in each tag as separate features improves classification accuracy, and that the number of documents in the training dataset affects the accuracy: if the number of negative documents is larger than the number of positive documents in the training dataset, the classification accuracy of our system increases up to 95% and becomes higher than that of the well-known Naïve Bayes and k-nearest-neighbor classifiers.
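A hedged sketch of the classification step: the GA-learned classifier is a weight vector over (tag, term) feature slots, and a new page is labeled on-topic when its similarity to that vector exceeds a threshold. The feature layout, cosine similarity, threshold, and the example weights are illustrative assumptions; the paper's exact similarity measure and learned weights are not reproduced here.

```python
import numpy as np

def page_features(tag_term_counts, tags, vocab):
    """Flatten per-tag term counts into one vector with a slot per (tag, term) pair."""
    vec = np.zeros(len(tags) * len(vocab))
    for i, tag in enumerate(tags):
        for j, term in enumerate(vocab):
            vec[i * len(vocab) + j] = tag_term_counts.get(tag, {}).get(term, 0)
    return vec

def classify(weights, features, threshold=0.5):
    """On-topic if cosine similarity between the learned weights and the page exceeds the threshold."""
    denom = np.linalg.norm(weights) * np.linalg.norm(features)
    sim = float(weights @ features / denom) if denom else 0.0
    return sim >= threshold, sim

tags = ["title", "h1", "body"]
vocab = ["crawler", "football", "web"]
weights = np.array([3.0, 0.0, 2.0,   2.0, 0.0, 1.0,   1.0, 0.0, 1.0])  # stand-in for GA-learned weights
page = {"title": {"crawler": 2, "web": 1}, "body": {"web": 4, "football": 1}}
print(classify(weights, page_features(page, tags, vocab)))
```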
996.
As the application layer in embedded systems comes to dominate over the hardware, ensuring software quality becomes a real challenge. Software testing is the most time-consuming and costly project phase, particularly in the embedded software domain. Misclassifying safe code as defective increases project costs and hence lowers margins. In this research, we present a defect prediction model based on an ensemble of classifiers. We have collaborated with an industrial partner from the embedded systems domain and use our generic defect prediction models with data coming from embedded projects. The embedded systems domain is similar to mission-critical software in that the goal is to catch as many defects as possible; the expectation from a predictor is therefore a very high probability of detection (pd). On the other hand, most embedded systems in practice are commercial products, and companies would like to lower their costs to remain competitive by keeping their false alarm (pf) rates as low as possible and improving their precision. In our experiments, we used data collected from our industry partners as well as publicly available data. Our results reveal that the ensemble of classifiers significantly decreases pf, down to 15%, while increasing precision by 43% and keeping the balance rate at 74%. The cost-benefit analysis of the proposed model shows that it is enough to inspect 23% of the code on local datasets to detect around 70% of the defects.
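The evaluation measures quoted above are easy to make concrete. The sketch below computes pd, pf, precision, and the balance measure commonly used in this line of defect-prediction work from a confusion matrix; the confusion-matrix numbers are hypothetical and merely chosen to land near the reported 70% pd / 15% pf / 74% balance operating point.

```python
import math

def prediction_metrics(tp, fp, tn, fn):
    """Defect-prediction metrics: probability of detection (pd), probability of
    false alarm (pf), precision, and balance (distance from the ideal pd=1, pf=0)."""
    pd = tp / (tp + fn) if tp + fn else 0.0
    pf = fp / (fp + tn) if fp + tn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    balance = 1.0 - math.sqrt((0.0 - pf) ** 2 + (1.0 - pd) ** 2) / math.sqrt(2.0)
    return {"pd": pd, "pf": pf, "precision": precision, "balance": balance}

# hypothetical confusion matrix for a module-level defect predictor
print(prediction_metrics(tp=70, fp=45, tn=255, fn=30))
```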
997.
Heart and vein diseases are among the most important health problems: more people die of them than of all other health problems and natural disasters. To reduce this number, intervention must start much earlier and people must be informed about the subject. In this study, a medical expert system has been built. All possible combinations of the CHD symptoms were enumerated (9 symptoms, 2^9 = 512 different cases) and collected into an accuracy table. This table was simplified with Boolean function simplification methods, and 94 rules were obtained by Boolean function minimization. These rules form the expert system's rule base. The values of 303 patients were compared against the implemented expert system. The resulting medical expert system achieved an accuracy of 86.5% for men, 84.5% for women, and 86.1% overall.
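A hedged sketch of how such a minimized rule base can be evaluated: each rule fixes a subset of the nine binary symptoms (the "don't care" symptoms dropped by Boolean minimization are simply omitted), and a patient is flagged when any rule matches. The symptom names and the three rules shown are hypothetical; the paper's 94 rules are not reproduced.

```python
# Nine binary CHD symptoms (hypothetical names standing in for the paper's symptom set).
SYMPTOMS = ["chest_pain", "high_bp", "smoking", "diabetes", "high_chol",
            "family_history", "obesity", "age_over_50", "sedentary"]

# Hypothetical minimised rules: symptoms not mentioned by a rule are "don't care".
RULES = [
    {"chest_pain": 1, "high_bp": 1, "smoking": 1},
    {"chest_pain": 1, "diabetes": 1, "high_chol": 1},
    {"family_history": 1, "age_over_50": 1, "obesity": 1, "high_bp": 1},
]

def predict_chd(patient):
    """Return True if any minimised rule fires for this patient (dict of 0/1 symptom values)."""
    return any(all(patient.get(s) == v for s, v in rule.items()) for rule in RULES)

patient = dict.fromkeys(SYMPTOMS, 0)
patient.update({"chest_pain": 1, "high_bp": 1, "smoking": 1})
print(predict_chd(patient))   # True: the first rule fires
```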
998.
This paper presents a saddle point programming approach to computing the medial axis (MA). After exploring the saddle point properties of the medial axis transform (MAT), a mathematical programming method is employed to establish the saddle point programming model of the MAT. Using the optimality conditions, i.e., the number and distribution of the tangent points between the boundary and the medial axis disk, one- and two-dimensional saddle point algorithms are developed. To determine the branch points, it is better to consider their generating mechanism; here, we identify a branch point from the sudden changes in the solutions of the one-dimensional saddle point algorithm. Hence, all the regular and irregular points of the MA can be computed by a general algorithm, which the numerical examples show to be efficient and accurate.
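The tangent-point criterion mentioned above can be illustrated with a naive brute-force check (not the saddle point algorithm): a point lies on the medial axis when its maximal inscribed disk touches the boundary at two or more well-separated points, and at a branch point it touches at three or more. The boundary sampling, tolerances, and the square test shape are illustrative assumptions.

```python
import numpy as np

def near_tangent_count(p, boundary, tol=1e-3, sep=0.2):
    """Count well-separated boundary samples that (approximately) realise the minimum
    distance from p, i.e. the tangent points of the maximal inscribed disk at p."""
    d = np.linalg.norm(boundary - p, axis=1)
    near = boundary[d <= d.min() + tol]
    kept = []
    for q in near:                       # greedily cluster near-tangent samples that lie far apart
        if all(np.linalg.norm(q - k) > sep for k in kept):
            kept.append(q)
    return len(kept)

# boundary of the unit square, densely sampled
t = np.linspace(0.0, 1.0, 200)
boundary = np.vstack([np.c_[t, np.zeros_like(t)], np.c_[t, np.ones_like(t)],
                      np.c_[np.zeros_like(t), t], np.c_[np.ones_like(t), t]])

print(near_tangent_count(np.array([0.5, 0.5]), boundary))    # 4 tangent points: branch point
print(near_tangent_count(np.array([0.25, 0.25]), boundary))  # 2 tangent points: regular MA point
print(near_tangent_count(np.array([0.5, 0.25]), boundary))   # 1 tangent point: not on the MA
```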
999.
This paper presents a robustly stabilizing model predictive control algorithm for systems with incrementally conic uncertain/nonlinear terms and bounded disturbances. The resulting control input consists of feedforward and feedback components. The feedforward control generates a nominal trajectory from the online solution of a finite-horizon constrained optimal control problem for a nominal system model. The feedback control policy is designed offline by utilizing a model of the uncertainty/nonlinearity and establishes invariant 'state tubes' around the nominal system trajectories. The entire controller is shown to be robustly stabilizing, with a region of attraction composed of the initial states for which the finite-horizon constrained optimal control problem is feasible for the nominal system. Synthesis of the feedback control policy involves the solution of linear matrix inequalities. An illustrative numerical example is provided to demonstrate the control design and the resulting closed-loop system performance. Copyright © 2010 John Wiley & Sons, Ltd.
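A minimal sketch of the feedforward-plus-feedback structure only: the true state is driven toward the nominal trajectory by u = v + K(x − z), so the error stays in a bounded tube despite the disturbance. The double-integrator model, the hand-picked gain K, and the placeholder nominal input are assumptions; in the paper the nominal input comes from the online finite-horizon problem and K from LMI synthesis.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])       # discrete-time double integrator (assumed plant)
B = np.array([[0.005], [0.1]])
K = np.array([[-8.0, -4.0]])                  # hand-picked stabilizing gain (not LMI-synthesised)

rng = np.random.default_rng(1)
x = np.array([2.0, 0.0])                      # true (disturbed) state
z = x.copy()                                  # nominal state
for t in range(50):
    v = (K @ z).item()                        # stand-in for the nominal (feedforward) input
    u = v + (K @ (x - z)).item()              # feedback keeps x inside a tube around z
    w = rng.uniform(-0.05, 0.05, size=2)      # bounded disturbance
    x = A @ x + (B * u).ravel() + w           # true system
    z = A @ z + (B * v).ravel()               # nominal system (disturbance-free)
    if t % 10 == 0:
        print(f"t={t:2d}  |x - z| = {np.linalg.norm(x - z):.3f}")   # tube radius stays bounded
```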
1000.
We consider the problem of similarity search in databases with costly metric distance measures. Given limited main memory, our goal is to develop a reference-based index that reduces the number of comparisons required to answer a query. The idea of reference-based indexing is to select a small set of reference objects that serve as a surrogate for the other objects in the database. We consider novel strategies for selecting references and for assigning references to database objects. For dynamic databases with frequent updates, we propose two incremental versions of the selection algorithm. Our experimental results show that our selection and assignment methods far outperform competing methods. This work is partially supported by the National Science Foundation under Grant No. 0347408.
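A hedged sketch of reference-based (pivot) filtering: distances from every database object to a few reference objects are precomputed, and the triangle inequality |d(q,r) − d(o,r)| ≤ d(q,o) lets a range query skip most of the expensive distance computations. The random reference selection and the Euclidean toy metric below stand in for the paper's selection/assignment strategies and costly metric.

```python
import numpy as np

def build_index(database, references, dist):
    """Precompute distances from every database object to each reference object."""
    return np.array([[dist(obj, r) for r in references] for obj in database])

def range_query(q, radius, database, references, ref_dists, dist):
    """Answer a range query, pruning object o whenever |d(q,r) - d(o,r)| > radius for some reference r,
    which by the triangle inequality guarantees d(q,o) > radius."""
    q_to_refs = np.array([dist(q, r) for r in references])
    results, comparisons = [], 0
    for i, obj in enumerate(database):
        if np.any(np.abs(q_to_refs - ref_dists[i]) > radius):
            continue                       # pruned without computing d(q, obj)
        comparisons += 1
        if dist(q, obj) <= radius:
            results.append(i)
    return results, comparisons

# toy metric space: points in the plane, Euclidean distance standing in for a costly metric
rng = np.random.default_rng(2)
database = rng.uniform(0, 10, size=(1000, 2))
references = database[rng.choice(len(database), 5, replace=False)]   # naive reference selection
euclid = lambda a, b: float(np.linalg.norm(a - b))
ref_dists = build_index(database, references, euclid)
hits, comparisons = range_query(np.array([5.0, 5.0]), 0.5, database, references, ref_dists, euclid)
print(len(hits), "matches after", comparisons, "full distance computations out of", len(database))
```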